    Streamlining models with explanations in the learning loop

    Several explainable AI methods give a machine learning user insight into the classification process of a black-box model in the form of local linear explanations. With such information, the user can judge which features are locally relevant to the classification outcome and gain an understanding of how the model reasons. Standard supervised learning processes are driven purely by the original features and target labels, without any feedback loop informed by the local feature relevance identified by post-hoc explanations. In this paper, we exploit this information to design a feature engineering phase in which we combine explanations with feature values. To do so, we develop two strategies, named Iterative Dataset Weighting and Targeted Replacement Values, which generate streamlined models that better mimic the explanation process presented to the user. We show how these streamlined models compare to the original black-box classifiers in terms of accuracy and compactness of the newly produced explanations.
    Comment: 16 pages, 10 figures, available repository
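    As a concrete illustration, the sketch below shows one way such an explanation-driven weighting loop could be set up. It is a minimal sketch only: the LIME-like local linear explainer, the rescaling rule, and all names in it are simplifying assumptions, not the paper's actual Iterative Dataset Weighting procedure.

```python
# Minimal sketch of an explanation-driven weighting loop in the spirit of
# "Iterative Dataset Weighting". The local linear explainer below is a
# simplified LIME-like stand-in, not the paper's exact procedure.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def local_linear_explanation(model, x, scale=0.3, n_samples=200, seed=0):
    """Fit a proximity-weighted linear surrogate around instance x."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    p = model.predict_proba(Z)[:, 1]                   # black-box outputs
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / scale)  # proximity kernel
    return Ridge(alpha=1.0).fit(Z, p, sample_weight=w).coef_

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

weights = np.ones(X.shape[1])
for it in range(3):                                    # iterative loop
    model = RandomForestClassifier(random_state=0).fit(X_tr * weights, y_tr)
    print(f"iter {it}: test accuracy = {model.score(X_te * weights, y_te):.3f}")
    coefs = np.array([local_linear_explanation(model, x)
                      for x in (X_tr * weights)[:50]])
    relevance = np.abs(coefs).mean(axis=0)             # aggregate local relevance
    weights *= relevance / relevance.mean()            # rescale features
```

    Features whose local explanations are consistently weak get shrunk across iterations, so the retrained model leans on the features that the explanations present to the user as relevant.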

    On driver behavior recognition for increased safety: A roadmap

    Advanced Driver-Assistance Systems (ADASs) are used to increase safety in the automotive domain, yet current ADASs notably operate without taking drivers' states into account, e.g., whether the driver is emotionally fit to drive. In this paper, we first review the state of the art of emotional and cognitive analysis for ADASs: we consider psychological models, the sensors needed to capture physiological signals, and the typical algorithms used for human emotion classification. Our investigation highlights a lack of advanced Driver Monitoring Systems (DMSs) for ADASs, which could increase driving quality and safety for both drivers and passengers. We then present our view of a novel perception architecture for driver monitoring, built around the concept of the Driver Complex State (DCS). The DCS relies on multiple non-obtrusive sensors and Artificial Intelligence (AI) to uncover the driver's state, and uses it to implement innovative Human–Machine Interface (HMI) functionalities. This concept will be implemented and validated in the recently EU-funded NextPerception project, which is briefly introduced.
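    As a purely illustrative sketch of the kind of sensor fusion such a DCS estimator implies, the snippet below trains a classifier on synthetic stand-ins for non-obtrusive sensor channels. The channels, the three-class driver-state labels, and the model choice are all assumptions made for illustration, not the NextPerception design.

```python
# Illustrative only: a minimal fusion step for a hypothetical Driver
# Complex State (DCS) estimator. Sensor channels and labels are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 600
heart_rate   = rng.normal(75, 12, n)   # bpm, e.g. from a contactless sensor
blink_rate   = rng.normal(15, 5, n)    # blinks/min, from a driver camera
steer_jitter = rng.gamma(2.0, 1.0, n)  # steering-angle variability

X = np.column_stack([heart_rate, blink_rate, steer_jitter])
# Hypothetical state labels: 0 = apt, 1 = stressed, 2 = drowsy.
y = np.clip((heart_rate > 90).astype(int) + 2 * (blink_rate < 8), 0, 2)

dcs = GradientBoostingClassifier().fit(X, y)
print(dcs.predict([[110.0, 12.0, 3.5]]))  # high heart rate -> likely "stressed"
```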

    Markov Decision Petri Nets with Uncertainty

    Markov Decision Processes (MDPs) are a well-known mathematical formalism that combines probabilities with decisions and allows one to compute optimal sequences of decisions, denoted as policies, for fairly large models in many situations. However, the practical application of MDPs often faces two problems: the specification of large models in an efficient and understandable way, which has to be combined with algorithms to generate the underlying MDP, and the inherent uncertainty in the transition probabilities and rewards of the resulting MDP. This paper introduces a new graphical formalism, called Markov Decision Petri Net with Uncertainty (MDPNU), that extends the Markov Decision Petri Net (MDPN) formalism, which was introduced to define MDPs. MDPNUs allow one to specify MDPs whose transition probabilities and rewards are defined by intervals rather than constant values. The resulting process is a Bounded Parameter MDP (BMDP). The paper shows how BMDPs are generated from MDPNUs, how analysis methods can be applied, and which results can be derived from the models.
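    As a hedged sketch of the kind of analysis interval-valued models admit, the snippet below runs robust ("pessimistic") value iteration on a tiny hand-made BMDP: within each transition interval, probability mass is poured onto the lowest-valued successor states before maximizing over actions. The two-state model and point rewards are invented for illustration; interval rewards would be handled analogously by taking their pessimistic endpoints.

```python
# Sketch: robust value iteration on a Bounded Parameter MDP whose
# transition probabilities are intervals [lo, hi]; the model is invented.
import numpy as np

def worst_case_dist(lo, hi, V):
    """Distribution within [lo, hi] (summing to 1) that minimizes p @ V."""
    p = lo.copy()
    slack = 1.0 - p.sum()                 # mass still to distribute
    for s in np.argsort(V):               # pour mass onto low-value states
        add = min(hi[s] - lo[s], slack)
        p[s] += add
        slack -= add
    return p

# P_lo/P_hi[a, s, s'] bound P(s' | s, a); R[a, s] are point rewards here.
P_lo = np.array([[[0.6, 0.2], [0.1, 0.7]],
                 [[0.3, 0.5], [0.4, 0.4]]])
P_hi = np.array([[[0.8, 0.4], [0.3, 0.9]],
                 [[0.5, 0.7], [0.6, 0.6]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
gamma, V = 0.9, np.zeros(2)

for _ in range(200):                      # value iteration to convergence
    Q = np.array([[R[a, s] + gamma * worst_case_dist(P_lo[a, s], P_hi[a, s], V) @ V
                   for s in range(2)] for a in range(2)])
    V = Q.max(axis=0)                     # best action vs. worst-case dynamics
print("pessimistic values:", V, "policy:", Q.argmax(axis=0))
```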

    Forecast of Distributed Energy Generation and Consumption in a Partially Observable Electrical Grid: A Machine Learning Approach

    With a radical energy transition fostered by the increased deployment of renewable, non-programmable energy sources over conventional ones, forecasting distributed energy production and consumption is becoming a cornerstone of grid security and efficient operational planning. Due to the distributed and fragmented design of such systems, real-time observability of Distributed Generation operations beyond the Transmission System Operator's domain is not always guaranteed. In this context, we propose a Machine Learning pipeline for forecasting distributed energy production and consumption in an electrical grid at the HV distribution substation level, where data from distributed generation is only partially observable. The proposed methodology is validated on real data for a large Italian region. Results show that the proposed model can predict the amount of load and distributed generation (and, by difference, the net power flux) at each HV distribution substation up to 7 days ahead, with a 24%-44% mean gain in out-of-sample accuracy over a non-naive baseline model, paving the way to more advanced and efficient power system management.
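    As a sketch of what such a pipeline can look like in practice, the snippet below fits a gradient-boosted regressor on lagged-load and calendar features for a single synthetic substation series, using only lags of at least the 7-day horizon so the features would be available at prediction time. The feature set and model are common choices for this task, not necessarily the paper's exact pipeline.

```python
# Sketch of a 7-day-ahead load forecast for one substation; data is synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
idx = pd.date_range("2021-01-01", periods=24 * 365, freq="h")
load = (50 + 10 * np.sin(2 * np.pi * idx.hour / 24)       # daily cycle
        + 5 * np.sin(2 * np.pi * idx.dayofweek / 7)       # weekly cycle
        + rng.normal(0, 2, len(idx)))                     # noise
df = pd.DataFrame({"load": load}, index=idx)

horizon = 24 * 7                                    # forecast 7 days ahead
for lag in (horizon, horizon + 24, horizon + 168):  # only lags >= horizon
    df[f"lag_{lag}"] = df["load"].shift(lag)
df["hour"], df["dow"] = df.index.hour, df.index.dayofweek
df = df.dropna()

split = -24 * 30                                    # last 30 days held out
X, y = df.drop(columns="load"), df["load"]
model = GradientBoostingRegressor().fit(X[:split], y[:split])
mae = np.abs(model.predict(X[split:]) - y[split:]).mean()
print(f"7-day-ahead MAE: {mae:.2f}")
```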